On-Time Flight Performance with Spark and Cosmos DB (Las Vegas)

On-Time Flight Performance Background

This notebook provides an analysis of On-Time Flight Performance and Departure Delays data using GraphFrames for Apache Spark.

Spark to Cosmos DB Connector

Connecting Apache Spark to Azure Cosmos DB accelerates your ability to solve fast-moving data science problems, where your data can be quickly persisted and retrieved using Azure Cosmos DB's DocumentDB API. With the Spark to Cosmos DB connector, you can more easily address scenarios including (but not limited to) blazing-fast IoT pipelines, updatable columns when performing analytics, push-down predicate filtering, and advanced analytics and data science against fast-changing data in a geo-replicated managed document store with guaranteed SLAs for consistency, availability, low latency, and throughput.

The Spark to Cosmos DB connector utilizes the Azure DocumentDB Java SDK. The data flow is as follows:

  1. A connection is made from the Spark master node to the Cosmos DB gateway node to obtain the partition map. Note that the user only specifies the Spark and Cosmos DB connections; the fact that the connector talks to the respective master and gateway nodes is transparent to the user.
  2. This information is provided back to the Spark master node. At this point, the query can be parsed to determine which partitions (and their locations) within Cosmos DB need to be accessed.
  3. This information is transmitted to the Spark worker nodes.
  4. This allows the Spark worker nodes to connect directly to the Cosmos DB partitions to extract the needed data and bring it back to the Spark partitions within the Spark worker nodes.
In [1]:
%%configure
{ "name":"Spark-to-Cosmos_DB_Connector", 
  "executorMemory": "8G", 
  "executorCores": 2, 
  "numExecutors": 2, 
  "driverCores": 2,
  "jars": ["wasb:///example/jars/0.0.3c/azure-documentdb-1.10.0.jar","wasb:///example/jars/0.0.3c/azure-cosmosdb-spark-0.0.3-SNAPSHOT.jar"],
  "conf": {
    "spark.jars.packages": "graphframes:graphframes:0.5.0-spark2.1-s_2.11",   
    "spark.jars.excludes": "org.scala-lang:scala-reflect"
   }
}
Current session configs: {u'kind': 'pyspark', u'name': u'Spark-to-Cosmos_DB_Connector', u'numExecutors': 2, u'conf': {u'spark.jars.packages': u'graphframes:graphframes:0.5.0-spark2.1-s_2.11', u'spark.jars.excludes': u'org.scala-lang:scala-reflect'}, u'executorCores': 2, u'driverCores': 2, u'jars': [u'wasb:///example/jars/0.0.3c/azure-documentdb-1.10.0.jar', u'wasb:///example/jars/0.0.3c/azure-cosmosdb-spark-0.0.3-SNAPSHOT.jar'], u'executorMemory': u'8G'}
ID | YARN Application ID | Kind | State | Spark UI | Driver log | Current session?
38 | application_1498709437567_0013 | spark | idle | Link | Link |
41 | application_1498709437567_0016 | spark | idle | Link | Link |
42 | application_1498709437567_0017 | spark | idle | Link | Link |
In [2]:
# Connection configuration
flightsConfig = {
  "Endpoint" : "https://doctorwho.documents.azure.com:443/",
  "Masterkey" : "xWpfqUBioucC2YkWV6uHVhgZtsPIjIVmE4VDPyNYnw2QUazvCHm3rnn9AeSgglLOT3yfjCR5YbLeh5MCc3aKNw==",
  "Database" : "DepartureDelays",
  "preferredRegions" : "Central US",
  "Collection" : "flights_pcoll",
  "SamplingRatio" : "1.0",
  "schema_samplesize" : "1000",
  "query_pagesize" : "2147483647",
  "query_custom" : "SELECT c.date, c.delay, c.distance, c.origin, c.destination FROM c"
}
Starting Spark application
ID | YARN Application ID | Kind | State | Spark UI | Driver log | Current session?
52 | application_1498709437567_0027 | pyspark | idle | Link | Link | ✔
SparkSession available as 'spark'.
In [3]:
flights = spark.read.format("com.microsoft.azure.cosmosdb.spark").options(**flightsConfig).load()
flights.count()
flights.cache()
DataFrame[origin: string, distance: int, date: int, delay: int, destination: string]
In [4]:
flights.createOrReplaceTempView("flights")

Obtaining airport code information

In [5]:
# Set File Paths
airportsnaFilePath = "wasb://data@doctorwhostore.blob.core.windows.net/airport-codes-na.txt"

# Obtain airports dataset
airportsna = spark.read.csv(airportsnaFilePath, header='true', inferSchema='true', sep='\t')
airportsna.createOrReplaceTempView("airports")

Flights departing from Las Vegas

In [6]:
%%sql
select count(1) from flights where origin = 'LAS'
count(1)
0 33107

Top 10 Delayed Destinations originating from Las Vegas

In [7]:
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY TotalDelays DESC)-1), '. '), destination) as destination, TotalDelays
from (
select a.city as destination, sum(f.delay) as TotalDelays, count(1) as Trips
from flights f
join airports a
  on a.IATA = f.destination
where f.origin = 'LAS'
and f.delay > 0
group by a.city 
order by sum(delay) desc limit 10
) a
destination TotalDelays
0 0. San Francisco 37362
1 1. Los Angeles 37299
2 2. Denver 23966
3 3. Chicago 23073
4 4. Phoenix 20078
5 5. San Diego 17609
6 6. Houston 17354
7 7. Oakland 15565
8 8. Reno 14828
9 9. Burbank 14267
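The `dense_rank() OVER (...) - 1` expression only exists to prefix each row with its zero-based rank so the sort order survives display in the notebook. The same numbering can be sketched in plain Python (ignoring ties, with totals taken from the table above):

```python
# Zero-based dense ranking by total delay, mimicking
# dense_rank() OVER (ORDER BY TotalDelays DESC) - 1 (ties ignored).
totals = {"San Francisco": 37362, "Los Angeles": 37299, "Denver": 23966}

ranked = [
    "%d. %s" % (rank, city)
    for rank, (city, _) in enumerate(
        sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    )
]
print(ranked)  # ['0. San Francisco', '1. Los Angeles', '2. Denver']
```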

Calculate median delays by destination cities departing from Las Vegas

In [8]:
%%sql
select a.city as destination, percentile_approx(f.delay, 0.5) as median_delay
from flights f
join airports a
  on a.IATA = f.destination
where f.origin = 'LAS'
group by a.city 
order by percentile_approx(f.delay, 0.5)
destination median_delay
0 Anchorage -6.214286
1 Honolulu, Oahu -6.066667
2 Bellingham -5.650000
3 Palm Springs -5.181818
4 Fresno -5.142857
5 Long Beach -4.692308
6 Charlotte -3.918605
7 Seattle -3.365854
8 Washington DC -3.119048
9 Minneapolis -2.750000
10 Philadelphia -2.000000
11 Cincinnati -1.642857
12 Miami -1.352941
13 Dallas -1.309524
14 Los Angeles -1.308824
15 Detroit -0.980000
16 New York -0.670103
17 Atlanta -0.636364
18 Hartford -0.600000
19 Denver -0.561856
20 Newark -0.317073
21 Phoenix -0.205357
22 San Francisco -0.179167
23 Jacksonville 0.000000
24 Memphis 0.000000
25 Portland 0.194444
26 Salt Lake City 0.357143
27 Cleveland 0.500000
28 Chicago 0.728070
29 Houston 0.843750
... ... ...
43 Burbank 5.071429
44 Albuquerque 5.250000
45 Reno 5.327586
46 Nashville 5.583333
47 Indianapolis 5.666667
48 Flint 5.666667
49 San Diego 5.735294
50 Buffalo 5.833333
51 New Orleans 6.000000
52 Oakland 6.277778
53 Amarillo 6.750000
54 Boise 7.333333
55 Midland 7.437500
56 Des Moines 7.500000
57 Fort Lauderdale 7.666667
58 Austin 7.687500
59 Oklahoma City 7.875000
60 El Paso 8.727273
61 Tulsa 8.750000
62 Tucson 8.833333
63 Raleigh 9.071429
64 Spokane 9.625000
65 Ontario 9.916667
66 Little Rock 10.000000
67 Orlando 11.333333
68 Louisville 11.500000
69 Wichita 11.500000
70 Albany 13.500000
71 Birmingham 14.000000
72 Lubbock 15.500000

73 rows × 2 columns
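`percentile_approx(delay, 0.5)` estimates the median without fully sorting the data, which matters at Spark scale. For intuition, the exact median of a small, hypothetical sample of delays can be computed with the standard library:

```python
import statistics

# Exact median of a small, hypothetical sample of departure delays (minutes);
# percentile_approx(delay, 0.5) approximates this over the full dataset.
delays = [-7, -5, -3, 0, 2, 11, 25]
print(statistics.median(delays))  # 0
```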

Building up a GraphFrame

Using GraphFrames for Apache Spark to run degree and motif queries against Cosmos DB

In [9]:
# Build `departureDelays` DataFrame
departureDelays = spark.sql("select cast(f.date as int) as tripid, cast(concat(concat(concat(concat(concat(concat('2014-', concat(concat(substr(cast(f.date as string), 1, 2), '-')), substr(cast(f.date as string), 3, 2)), ' '), substr(cast(f.date as string), 5, 2)), ':'), substr(cast(f.date as string), 7, 2)), ':00') as timestamp) as `localdate`, cast(f.delay as int), cast(f.distance as int), f.origin as src, f.destination as dst, o.city as city_src, d.city as city_dst, o.state as state_src, d.state as state_dst from flights f join airports o on o.iata = f.origin join airports d on d.iata = f.destination") 

# Create Temporary View and cache
departureDelays.createOrReplaceTempView("departureDelays")
departureDelays.cache()
DataFrame[tripid: int, localdate: timestamp, delay: int, distance: int, src: string, dst: string, city_src: string, city_dst: string, state_src: string, state_dst: string]
In [10]:
# Note, ensure you have already installed the GraphFrames spark-package
import os
sc.addPyFile(os.path.expanduser('./graphframes_graphframes-0.5.0-spark2.1-s_2.11.jar'))
from pyspark.sql.functions import *
from graphframes import *

# Create Vertices (airports) and Edges (flights)
tripVertices = airportsna.withColumnRenamed("IATA", "id").distinct()
tripEdges = departureDelays.select("tripid", "delay", "src", "dst", "city_dst", "state_dst")

# Cache Vertices and Edges
tripEdges.cache()
tripVertices.cache()

# Create TripGraph
tripGraph = GraphFrame(tripVertices, tripEdges)

Which flights departing LAS have the most significant average delays?

Note, the joins are there to show the city name instead of the IATA code. The dense_rank() code helps keep the rows correctly ordered when viewed in Jupyter notebooks.

In [11]:
flightDelays = tripGraph.edges.filter("src = 'LAS' and delay > 0").groupBy("src", "dst").avg("delay").sort(desc("avg(delay)"))
flightDelays.createOrReplaceTempView("flightDelays")
In [12]:
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY avg_delay DESC)-1), '. '), city) as destination, 
avg_delay
from (
select a.city, `avg(delay)` as avg_delay 
from flightDelays f
join airports a
on f.dst = a.iata
order by `avg(delay)` 
desc limit 10
) s
destination avg_delay
0 0. Honolulu, Oahu 73.121951
1 1. Albany 50.955556
2 2. Palm Springs 47.914894
3 3. Cincinnati 44.400000
4 4. Lubbock 43.015152
5 5. Anchorage 42.428571
6 6. San Francisco 41.102310
7 7. Wichita 40.351351
8 8. Long Beach 39.967213
9 9. Boston 37.847826

Which is the most important airport (in terms of connections)?

It would take a relatively complicated SQL statement to calculate all of the edges into a single vertex, grouped by vertex. Instead, we can use the graph's degrees method.
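A vertex's degree is simply the number of edges touching it. A minimal pure-Python sketch of what `tripGraph.degrees` computes, using a hypothetical edge list:

```python
from collections import Counter

# Hypothetical flight edges as (src, dst) pairs.
edges = [("SEA", "SJC"), ("SEA", "LAS"), ("LAS", "SJC"), ("ATL", "LAS")]

# Degree = in-degree + out-degree, which is what GraphFrames' degrees returns.
degree = Counter()
for src, dst in edges:
    degree[src] += 1
    degree[dst] += 1

print(degree.most_common(2))  # [('LAS', 3), ('SEA', 2)]
```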

In [13]:
airportConnections = tripGraph.degrees.sort(desc("degree"))
airportConnections.createOrReplaceTempView("airportConnections")
In [14]:
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY degree DESC)-1), '. '), city) as destination, 
degree
from (
select a.city, f.degree 
from airportConnections f 
join airports a
  on a.iata = f.id
order by f.degree desc 
limit 10
) a
destination degree
0 0. Atlanta 179774
1 1. Dallas 133966
2 2. Chicago 125405
3 3. Los Angeles 106853
4 4. Denver 103699
5 5. Houston 85685
6 6. Phoenix 79672
7 7. San Francisco 77635
8 8. Las Vegas 66101
9 9. Charlotte 56103

Are there direct flights between Seattle and San Jose?

In [15]:
filteredPaths = tripGraph.bfs(
    fromExpr = "id = 'SEA'",
    toExpr = "id = 'SJC'",
    maxPathLength = 1)
filteredPaths.show()
+--------------------+--------------------+--------------------+
|                from|                  e0|                  to|
+--------------------+--------------------+--------------------+
|[Seattle,WA,USA,SEA]|[1010600,-2,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1012030,-4,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1011215,-6,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1011855,-3,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1010710,-1,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1020600,2,SEA,SJ...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1022030,-3,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1021600,-2,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1021215,-9,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1021855,-1,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1020710,-9,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1030600,-5,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1032030,-1,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1031600,-7,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1031215,-3,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1031855,-1,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1030710,-5,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1040600,4,SEA,SJ...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1042030,-2,SEA,S...|[San Jose,CA,USA,...|
|[Seattle,WA,USA,SEA]|[1041215,-4,SEA,S...|[San Jose,CA,USA,...|
+--------------------+--------------------+--------------------+
only showing top 20 rows

But are there any direct flights between San Jose and Buffalo?

  • Try maxPathLength = 1, which means one edge (i.e. one flight) between SJC and BUF, i.e. a direct flight
  • Try maxPathLength = 2, which means two edges between SJC and BUF, i.e. all the different variations of flights between San Jose and Buffalo with exactly one stopover in between
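The maxPathLength semantics can be sketched as a plain depth-limited breadth-first search over a tiny, hypothetical route graph (this is a simplification, not GraphFrames' actual implementation):

```python
# Depth-limited BFS: find paths from src to dst using at most max_len edges.
def bfs_paths(edges, src, dst, max_len):
    paths, frontier = [], [[src]]
    for _ in range(max_len):
        next_frontier = []
        for path in frontier:
            for s, d in edges:
                if s == path[-1]:
                    if d == dst:
                        paths.append(path + [d])
                    else:
                        next_frontier.append(path + [d])
        frontier = next_frontier
    return paths

# Hypothetical routes: no direct SJC -> BUF flight, but a one-stop via BOS.
edges = [("SJC", "BOS"), ("BOS", "BUF"), ("SJC", "SEA")]
print(bfs_paths(edges, "SJC", "BUF", 1))  # []
print(bfs_paths(edges, "SJC", "BUF", 2))  # [['SJC', 'BOS', 'BUF']]
```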
In [16]:
filteredPaths = tripGraph.bfs(
  fromExpr = "id = 'SJC'",
  toExpr = "id = 'BUF'",
  maxPathLength = 1)
filteredPaths.show()
+----+-----+-------+---+
|City|State|Country| id|
+----+-----+-------+---+
+----+-----+-------+---+
In [17]:
filteredPaths = tripGraph.bfs(
  fromExpr = "id = 'SJC'",
  toExpr = "id = 'BUF'",
  maxPathLength = 2)
filteredPaths.show()
+--------------------+--------------------+-------------------+--------------------+--------------------+
|                from|                  e0|                 v1|                  e1|                  to|
+--------------------+--------------------+-------------------+--------------------+--------------------+
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1010635,-6,BOS,B...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1011059,13,BOS,B...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1011427,19,BOS,B...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1020635,-4,BOS,B...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1021059,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1021427,194,BOS,...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1030635,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1031059,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1031427,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1040635,16,BOS,B...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1041552,96,BOS,B...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1050635,1,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1051059,48,BOS,B...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1051427,443,BOS,...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1060635,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1061059,294,BOS,...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1061427,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1070730,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1071730,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
|[San Jose,CA,USA,...|[1012124,16,SJC,B...|[Boston,MA,USA,BOS]|[1080710,0,BOS,BU...|[Buffalo,NY,USA,BUF]|
+--------------------+--------------------+-------------------+--------------------+--------------------+
only showing top 20 rows

In that case, what is the most common transfer point between San Jose and Buffalo?

In [18]:
commonTransferPoint = filteredPaths.groupBy("v1.id", "v1.City").count().orderBy(desc("count"))
commonTransferPoint.createOrReplaceTempView("commonTransferPoint")
In [19]:
%%sql
select concat(concat((dense_rank() OVER (PARTITION BY 1 ORDER BY Trips DESC)-1), '. '), City) as destination, 
Trips
from (
select City, `count` as Trips from commonTransferPoint order by Trips desc limit 10
) a
destination Trips
0 0. Las Vegas 107442
1 1. Chicago 87696
2 2. Phoenix 76770
3 3. New York 31968
4 4. Atlanta 28910
5 5. Chicago 20060
6 6. Boston 1488
7 7. Minneapolis 164

Predicting Flight Delays

Extending the analysis we have done up to this point, can we also predict whether a flight will be delayed, on-time, or early based on the available data?

Prepare the Dataset

The first thing we will do is cleanse the data and apply labels to our information (e.g. early, on-time, delayed). We will also want to remove any rows with NULL values.
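The labeling rule can be sketched as a small Python function (note that the two-class cell below collapses early into delayed for the actual model):

```python
# Map a departure delay (minutes) to the flight_status label.
def flight_status(delay):
    if delay == 0:
        return "on-time"
    return "early" if delay < 0 else "delayed"

print([flight_status(d) for d in (-5, 0, 12)])  # ['early', 'on-time', 'delayed']
```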

In [20]:
# This contains a generated mapping between tripid and airline
#   You can get the file at https://github.com/dennyglee/databricks/blob/master/misc/trip_airline_map.csv
#   For this example, the trip_airline_map.csv file has been pushed to my mounted bucket.
#tripAirlineMapFilePath = "wasb://data@doctorwhostore.blob.core.windows.net/trip_airline_map.csv"
#tripAirlineMap = spark.read.csv(tripAirlineMapFilePath, sep=",", header=True)
#tripAirlineMap.createOrReplaceTempView("tripAirlineMap")
In [21]:
# Prep dataset
#flightML = spark.sql("select cast(distance as double) as distance, src as origin, state_src as origin_state, dst as destination, state_dst as destination_state, concat(concat(concat(cast(tripid as string), src), dst), cast((delay + 2000) as string)) as trip_identifier, case when delay = 0 then 'on-time' when delay < 0 then 'early' else 'delayed' end as flight_status from departureDelays")
flightML = spark.sql("select cast(distance as double) as distance, src as origin, state_src as origin_state, dst as destination, state_dst as destination_state, concat(concat(concat(cast(tripid as string), src), dst), cast((delay + 2000) as string)) as trip_identifier, case when delay = 0 then 'on-time' else 'delayed' end as flight_status from departureDelays where src IN ('LAS', 'SEA')")
flightML = flightML.dropna().dropDuplicates()
flightML.createOrReplaceTempView("flightML")
In [22]:
# Join flights and airline information
#dataset = spark.sql("select f.distance, f.origin, f.origin_state, f.destination, f.destination_state, f.trip_identifier, f.flight_status, m.airline from flightML f join tripAirlineMap m on m.trip_identifier = f.trip_identifier")
dataset = flightML
cols = dataset.columns
In [23]:
dataset.printSchema()
root
 |-- distance: double (nullable = true)
 |-- origin: string (nullable = true)
 |-- origin_state: string (nullable = true)
 |-- destination: string (nullable = true)
 |-- destination_state: string (nullable = true)
 |-- trip_identifier: string (nullable = true)
 |-- flight_status: string (nullable = false)

Building ML Pipeline

Before we can run our various models against this data, we first need to vectorize it via a One-Hot Encoder (for categorical data), a String Indexer (to create an index from our label values), and a Vector Assembler.
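Conceptually, StringIndexer maps each category to a numeric index ordered by descending frequency, and OneHotEncoder turns that index into a binary vector. A minimal pure-Python sketch with hypothetical values (ties here are broken by first appearance, a simplification; Spark's OneHotEncoder also drops the last category by default, while this sketch keeps the full-length vector for clarity):

```python
# StringIndexer: assign indices by descending frequency (ties by first seen).
origins = ["LAS", "SEA", "LAS", "LAS", "SEA", "JFK"]
by_freq = sorted(set(origins), key=lambda v: (-origins.count(v), origins.index(v)))
index = {v: i for i, v in enumerate(by_freq)}
print(index)  # {'LAS': 0, 'SEA': 1, 'JFK': 2}

# One-hot encoding: index -> binary vector.
def one_hot(i, size):
    return [1 if j == i else 0 for j in range(size)]

print(one_hot(index["SEA"], len(index)))  # [0, 1, 0]
```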

In [24]:
# One-Hot Encoding
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler

#categoricalColumns = ["origin", "origin_state", "destination", "destination_state", "trip_identifier", "airline"]
categoricalColumns = ["origin", "origin_state", "destination", "destination_state", "trip_identifier"]
stages = [] # stages in our Pipeline
for categoricalCol in categoricalColumns:
  # Category Indexing with StringIndexer
  stringIndexer = StringIndexer(inputCol=categoricalCol, outputCol=categoricalCol+"Index")
  
  # Use OneHotEncoder to convert categorical variables into binary SparseVectors
  encoder = OneHotEncoder(inputCol=categoricalCol+"Index", outputCol=categoricalCol+"classVec")
  
  # Add stages.  These are not run here, but will run all at once later on.
  stages += [stringIndexer, encoder]

# Convert label into label indices using the StringIndexer
label_stringIdx = StringIndexer(inputCol = "flight_status", outputCol = "label")
stages += [label_stringIdx]

# Transform all features into a vector using VectorAssembler
numericCols = ["distance"]
assemblerInputs = [c + "classVec" for c in categoricalColumns] + numericCols
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
stages += [assembler]
In [25]:
# Create a Pipeline.
pipeline = Pipeline(stages=stages)
# Run the feature transformations.
#  - fit() computes feature statistics as needed.
#  - transform() actually transforms the features.
pipelineModel = pipeline.fit(dataset)
dataset = pipelineModel.transform(dataset)

# Keep relevant columns
selectedcols = ["label", "features"] + cols
dataset = dataset.select(selectedcols)
dataset.show()
+-----+--------------------+--------+------+------------+-----------+-----------------+-----------------+-------------+
|label|            features|distance|origin|origin_state|destination|destination_state|  trip_identifier|flight_status|
+-----+--------------------+--------+------+------------+-----------+-----------------+-----------------+-------------+
|  0.0|(55484,[55,90,515...|   790.0|   SEA|          WA|        JNU|               AK|2181120SEAJNU1995|      delayed|
|  0.0|(55484,[0,1,42,10...|  1192.0|   LAS|          NV|        STL|               MO|3010715LASSTL1996|      delayed|
|  0.0|(55484,[0,1,70,11...|   857.0|   LAS|          NV|        ICT|               KS|3291310LASICT2014|      delayed|
|  0.0|(55484,[0,1,6,85,...|   917.0|   LAS|          NV|        DFW|               TX|1140600LASDFW1990|      delayed|
|  0.0|(55484,[0,1,6,85,...|   917.0|   LAS|          NV|        DFW|               TX|1260800LASDFW1990|      delayed|
|  0.0|(55484,[6,85,1192...|  1442.0|   SEA|          WA|        DFW|               TX|1211115SEADFW1996|      delayed|
|  0.0|(55484,[32,84,363...|   650.0|   SEA|          WA|        FAT|               CA|1182015SEAFAT1990|      delayed|
|  0.0|(55484,[0,1,6,85,...|   917.0|   LAS|          NV|        DFW|               TX|2130945LASDFW1998|      delayed|
|  0.0|(55484,[0,1,6,85,...|   917.0|   LAS|          NV|        DFW|               TX|2250605LASDFW1994|      delayed|
|  0.0|(55484,[0,1,6,85,...|   917.0|   LAS|          NV|        DFW|               TX|3131305LASDFW1996|      delayed|
|  0.0|(55484,[0,1,6,85,...|   917.0|   LAS|          NV|        DFW|               TX|3220750LASDFW1996|      delayed|
|  0.0|(55484,[0,1,32,84...|   225.0|   LAS|          NV|        FAT|               CA|3101722LASFAT2024|      delayed|
|  0.0|(55484,[0,1,32,84...|   225.0|   LAS|          NV|        FAT|               CA|3280917LASFAT1993|      delayed|
|  0.0|(55484,[6,85,1277...|  1442.0|   SEA|          WA|        DFW|               TX|3231550SEADFW1995|      delayed|
|  0.0|(55484,[20,90,768...|  1259.0|   SEA|          WA|        ANC|               AK|1042350SEAANC2043|      delayed|
|  0.0|(55484,[20,90,475...|  1259.0|   SEA|          WA|        ANC|               AK|1082130SEAANC2003|      delayed|
|  1.0|(55484,[20,90,738...|  1259.0|   SEA|          WA|        ANC|               AK|1152015SEAANC2000|      on-time|
|  0.0|(55484,[20,90,150...|  1259.0|   SEA|          WA|        ANC|               AK|1161815SEAANC2028|      delayed|
|  0.0|(55484,[12,84,544...|   605.0|   SEA|          WA|        SJC|               CA|1202030SEASJC1999|      delayed|
|  0.0|(55484,[0,1,12,84...|   336.0|   LAS|          NV|        SJC|               CA|2040600LASSJC1999|      delayed|
+-----+--------------------+--------+------+------------+-----------+-----------------+-----------------+-------------+
only showing top 20 rows

Randomly split data into training and test datasets

  • Set the seed for reproducibility
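randomSplit partitions rows by drawing seeded random numbers, so the same seed always reproduces the same split. A simplified sketch (not Spark's exact algorithm):

```python
import random

# Seeded 70/30 split: the same seed always yields the same partition.
def split(rows, frac, seed):
    rnd = random.Random(seed)
    train = [r for r in rows if rnd.random() < frac]
    test = [r for r in rows if r not in train]
    return train, test

a = split(list(range(10)), 0.7, seed=100)
b = split(list(range(10)), 0.7, seed=100)
print(a == b)  # True
```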
In [31]:
(trainingData, testData) = dataset.randomSplit([0.7, 0.3], seed = 100)

Logistic Regression

Let's try using logistic regression to see if we can accurately predict if a flight will be delayed.

  • First, we will train a model on the training data using Logistic Regression
  • Next, we will run that model against the testData
In [32]:
from pyspark.ml.classification import LogisticRegression

# Create initial LogisticRegression model
lr = LogisticRegression(labelCol="label", featuresCol="features", maxIter=10)

# Train model with Training Data
lrModel = lr.fit(trainingData)
In [33]:
# Make predictions on test data using the transform() method.
# LogisticRegression.transform() will only use the 'features' column.
predictions = lrModel.transform(testData)

View LR Model's predictions

  • Recall, label is the actual test value and prediction is the predicted value
    • where 0 = delayed and 1 = on-time (the dataset prepared above uses only these two classes)
In [34]:
selected = predictions.select("label", "prediction", "probability", "flight_status", "destination", "destination_state").where("destination = 'SEA'")
selected.show()
+-----+----------+--------------------+-------------+-----------+-----------------+
|label|prediction|         probability|flight_status|destination|destination_state|
+-----+----------+--------------------+-------------+-----------+-----------------+
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
|  0.0|       0.0|[0.99977000354706...|      delayed|        SEA|               WA|
+-----+----------+--------------------+-------------+-----------+-----------------+
only showing top 20 rows

Use BinaryClassificationEvaluator to evaluate our model

In [35]:
from pyspark.ml.evaluation import BinaryClassificationEvaluator

# Evaluate model
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
evaluator.evaluate(predictions)
0.532336654872024
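BinaryClassificationEvaluator's default metric is areaUnderROC, so the 0.53 above is only slightly better than the 0.5 of random guessing. AUC can be read as the probability that a randomly chosen positive example is scored above a randomly chosen negative one, sketched here with hypothetical scores and labels:

```python
# AUC = P(score of a random positive > score of a random negative),
# with ties counted as half. Scores and labels are hypothetical.
scores = [(0.9, 1), (0.7, 0), (0.6, 1), (0.3, 0)]
pos = [s for s, y in scores if y == 1]
neg = [s for s, y in scores if y == 0]
auc = sum(
    1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg
) / (len(pos) * len(neg))
print(auc)  # 0.75
```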
In [ ]: